Characters for good, created by artificial intelligence

#artificialintelligence

As it becomes easier to create hyper-realistic digital characters using artificial intelligence, much of the conversation around these tools has centered on misleading and potentially dangerous deepfake content. But the technology can also be used for positive purposes -- to revive Albert Einstein to teach a physics class, talk through a career change with your older self, or anonymize people while preserving facial communication. To encourage the technology's positive possibilities, MIT Media Lab researchers and their collaborators at the University of California at Santa Barbara and Osaka University have compiled an open-source, easy-to-use character generation pipeline that combines AI models for facial gestures, voice, and motion and can be used to create a variety of audio and video outputs. The pipeline also marks the resulting output with a traceable, as well as human-readable, watermark to distinguish it from authentic video content and to show how it was generated -- an addition to help prevent its malicious use. By making this pipeline easily available, the researchers hope to inspire teachers, students, and health-care workers to explore how such tools can help them in their respective fields. If more students, educators, health-care workers, and therapists have a chance to build and use these characters, the results could improve health and well-being and contribute to personalized education, the researchers write in Nature Machine Intelligence.


Artificial Intelligence Ethics Approved by 193 Countries

#artificialintelligence

PARIS, France, December 1, 2021 (ENS) – The first global agreement on the ethics of artificial intelligence, AI, was adopted Thursday by 193 countries. All the member states of the UN Educational, Scientific and Cultural Organization, UNESCO, adopted the historic agreement that defines the common values and principles needed to ensure the healthy development of AI. "The world needs rules for artificial intelligence to benefit humanity," said UNESCO Director-General Audrey Azoulay. "The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving states the responsibility to apply it at their level. UNESCO will support its 193 member states in its implementation and ask them to report regularly on their progress and practices."


Advances in application of Artificial Intelligence

#artificialintelligence

Scientists have recorded major breakthroughs in the application of Artificial Intelligence (AI) in health, weather forecasting and other areas of science. Scientists at the University of Houston's (UH's) Air Quality Forecasting and Modeling Lab have developed a new artificial intelligence system that could lead to improved ways to control high-ozone problems and even contribute to solutions for climate change issues. The breakthrough, published online in the journal Scientific Reports, showed that ozone levels in the earth's troposphere (the lowest level of the atmosphere) can now be forecast with accuracy up to two weeks in advance, a remarkable improvement over current systems, which can accurately predict ozone levels only three days ahead. Yunsoo Choi, professor of atmospheric chemistry and AI deep learning at UH's College of Natural Sciences and Mathematics, said: "This was very challenging. Nobody had done this previously. I believe we are the first to try to forecast surface ozone levels two weeks in advance."


WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use

#artificialintelligence

Artificial Intelligence (AI) holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use, according to new WHO guidance published today. The report, Ethics and governance of artificial intelligence for health, is the result of two years of consultations held by a panel of international experts appointed by WHO.

"Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm," said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. "This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls."

Artificial intelligence can be, and in some wealthy countries already is being, used to improve the speed and accuracy of diagnosis and screening for diseases; to assist with clinical care; to strengthen health research and drug development; and to support diverse public health interventions, such as disease surveillance, outbreak response, and health systems management. AI could also empower patients to take greater control of their own health care and better understand their evolving needs. It could enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.

However, WHO's new report cautions against overestimating the benefits of AI for health, especially when this occurs at the expense of the core investments and strategies required to achieve universal health coverage. It also points out that opportunities are linked to challenges and risks, including unethical collection and use of health data; biases encoded in algorithms; and risks of AI to patient safety, cybersecurity, and the environment.

For example, while private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control. The report also emphasizes that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings. AI systems should therefore be carefully designed to reflect the diversity of socio-economic and health-care settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for the millions of health-care workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.

Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology's design, development, and deployment.

Six principles to ensure AI works for the public interest in all countries

To limit the risks and maximize the opportunities intrinsic to the use of AI for health, WHO provides the following principles as the basis for AI regulation and governance:

1. Protecting human autonomy. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

2. Promoting human well-being and safety and the public interest. The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures for quality control in practice and quality improvement in the use of AI must be available.

3. Ensuring transparency, explainability and intelligibility. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

4. Fostering responsibility and accountability. Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning, and for redress, for individuals and groups adversely affected by decisions based on algorithms.

5. Ensuring inclusiveness and equity. Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.

6. Promoting AI that is responsive and sustainable. Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether the AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to the use of automated systems.

These principles will guide future WHO work to support efforts to ensure that the full potential of AI for health care and public health is used for the benefit of all.


Boston Dynamics' Spot the robot dog remotely measuring patients' vitals amid coronavirus pandemic

Boston Herald

Spot the robot dog is ready to see you now for your contact-free vitals. Researchers from MIT and Brigham and Women's Hospital are exploring a new way to lower the risk for health-care workers amid the coronavirus pandemic -- by using Boston Dynamics' Spot the robot dog to remotely measure patients' vital signs. "In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs," Hen-Wei Huang, an MIT postdoctoral researcher, said in a statement. "We thought it should be possible for us to use a robot to remove the health-care worker from the risk of directly exposing themselves to the patient." Using four cameras mounted on the dog-like robot, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate and blood oxygen saturation in healthy patients.


App promises to improve pain management in dementia patients

#artificialintelligence

University of Alberta computing scientists are developing an app to help health-care staff assess and manage pain in patients with dementia and other neurodegenerative diseases. "The challenge with understanding pain in patients with dementia is that the expressions of pain in these individuals are often mistaken for psychiatric problems," said Eleni Stroulia, professor in the Department of Computing Science and co-lead on the project. "So we asked, how can we use technology to better understand the pain of people with dementia?" Along with Stroulia, the project is led by Thomas Hadjistavropoulos at the University of Regina as part of AGE-WELL, one of Canada's Networks of Centres of Excellence. The app will digitize a pen-and-paper observational checklist that past research has shown helps health-care workers, such as nurses, assess pain in their patients with dementia.